Key AI related terms to be aware of, with UK government explanations
Algorithm
A set of instructions used to perform tasks (such as calculations and data analysis), usually using a computer or another smart device. For further information see UK Post Brief 57 'Artificial intelligence: An explainer'.
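As a purely illustrative sketch (not part of the source glossary), the idea of an algorithm as a fixed sequence of steps can be shown with a short Python example; the function name and data are our own:

```python
# A minimal illustration of an algorithm: fixed instructions that
# turn input data into a result — here, the arithmetic mean.
def mean(values):
    total = 0
    for v in values:              # step 1: add every value
        total += v
    return total / len(values)    # step 2: divide by the count

average = mean([2, 4, 6])         # average is 4.0
```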
Algorithmic bias
AI systems can have bias embedded in them, which can manifest through various pathways including biased training datasets or biased decisions made by humans in the design of algorithms.
For further information see UK Post Note 708 'Policy implications of artificial intelligence (AI)' and UK Post Note 633 'Interpretable Machine Learning'.
Artificial intelligence (AI)
The UK Government’s 2023 policy paper on ‘A pro-innovation approach to AI regulation’ defined AI, AI systems or AI technologies as “products and services that are ‘adaptable’ and ‘autonomous’.” The adaptability of AI refers to AI systems, after being trained, often developing the ability to perform new ways of finding patterns and connections in data that are not directly envisioned by their human programmers. The autonomy of AI refers to some AI systems that can make decisions without the intent or ongoing control of a human.
Artificial general intelligence
Sometimes known as general AI, strong AI or broad AI, this often refers to a theoretical form of AI that can achieve human-level or higher performance across most cognitive tasks. See also Superintelligence.
Artificial neural network
A computer structure inspired by the biological brain, consisting of a large set of interconnected computational units (‘neurons’) that are connected in layers. Data passes between these units as between neurons in a brain. Outputs of a previous layer are used as inputs for the next, and there can be hundreds of layers of units. An artificial neural network with more than 3 layers is considered a deep learning algorithm. Examples of artificial neural networks include Transformers or Generative adversarial networks.
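The layered structure described above can be sketched in a few lines of Python. This is a toy illustration only; the weights are invented for the example and no training takes place:

```python
import math

def neuron(inputs, weights, bias):
    # One computational unit: a weighted sum of its inputs plus a bias,
    # passed through a non-linear 'activation' function (here, a sigmoid).
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def layer(inputs, units):
    # Every unit in a layer reads all outputs of the previous layer.
    return [neuron(inputs, weights, bias) for weights, bias in units]

def forward(x, network):
    # Outputs of each layer become the inputs of the next.
    for units in network:
        x = layer(x, units)
    return x

# A tiny network: 2 inputs, a hidden layer of 2 units, 1 output unit.
# The weights are made up purely for illustration.
network = [
    [([0.5, -0.6], 0.1), ([0.3, 0.8], -0.2)],  # hidden layer
    [([1.0, -1.0], 0.0)],                       # output layer
]
output = forward([1.0, 2.0], network)
```

A deep learning system works on the same principle but with many more layers and units, and with weights learned from data rather than set by hand.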
Automated decision-making
A term that the Office for AI, within the Department for Science, Innovation and Technology, refers to in an Ethics, Transparency and Accountability Framework for Automated decision-making as “both solely automated decisions (no human judgement involved) and automated assisted decision-making (assisting human judgement).” AI systems are increasingly being used by the public and private sector for automated decision-making.
For further information see UK Post Note 708 'Policy implications of artificial intelligence (AI)'.
Compute
Compute is defined by the Independent Review of the Future of Compute as ‘the systems assembled at scale to tackle computational tasks beyond the capabilities of everyday computers. This includes both physical supercomputers and the use of cloud provision to tackle high computational loads.’ Compute is a driver of AI developments (UK Post Brief 57 'Artificial intelligence: An explainer').
Computer vision
This focuses on programming computer systems to interpret and understand images, videos and other visual inputs and take actions or make recommendations based on that information. Applications include object recognition, facial recognition, medical imaging analysis, navigation and video surveillance.
Deep learning
A subset of machine learning that uses artificial neural networks to recognise patterns in data and provide a suitable output, for example, a prediction (UK Post Brief 57 'Artificial intelligence: An explainer'). Deep learning is suitable for complex learning tasks, and has improved AI capabilities in tasks such as voice and image recognition, object detection and autonomous driving (UK Post Note 633 Interpretable Machine Learning).
Deepfakes
Pictures and videos that are deliberately altered to generate misinformation and disinformation. Advances in generative AI have lowered the barrier for the production of deepfakes (UK Post Note 708 'Policy implications of artificial intelligence (AI)').
Disinformation
The UK Government defines disinformation as the “deliberate creation and spreading of false and/or manipulated information that is intended to deceive and mislead people, either for the purposes of causing harm, or for political, personal or financial gain”. Advances in generative AI have lowered the barrier for the production of disinformation, misinformation, and deepfakes (UK Post Note 708 'Policy implications of artificial intelligence (AI)').
Educational technology
Technologies specifically developed to facilitate teaching and learning, which may or may not encompass AI. See Use of artificial intelligence in education delivery for further details.
Fine-tuning
Fine-tuning a model involves developers training it further on a specific set of data to improve its performance for a specific application. For further information see UK Post Note 712 and UK Post Brief 57 'Artificial intelligence: An explainer'.
Foundation models
A machine learning model trained on a vast amount of data so that it can easily be adapted for a wide range of general tasks, including being able to generate outputs (generative AI). See also large language models.
Frontier AI
Defined by the Government Office for Science as ‘highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models’. Currently, this primarily encompasses a few large language models, including:
- ChatGPT (OpenAI)
- Claude (Anthropic)
- Bard (Google)
Generative AI
An AI model that generates text, images, audio, video or other media in response to user prompts. It uses machine learning techniques to create new data that has similar characteristics to the data it was trained on. Generative AI applications include chatbots, photo and video filters, and virtual assistants.
General-purpose AI
Often refers to AI models that can be adapted to a wide range of applications (such as Foundation Models). See also narrow AI.
Generative adversarial networks
Generative adversarial networks are a driver of recent AI developments (UK Post Brief 57 'Artificial intelligence: An explainer'). These are made up of two artificial neural sub-networks: a generator network and a discriminator network. The generator network is fed training data and generates artificial data based on patterns in the training data. The discriminator network compares the artificially generated data with the ‘real’ training data and feeds back to the generator network where it has detected differences. The generator then alters its parameters. Over time the generator network learns to generate more realistic data, until the discriminator network cannot tell what is artificial and what is ‘real’ training data and the AI model generates the desired outcomes. See also artificial neural networks and transformers.
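The adversarial feedback loop described above can be caricatured in a few lines of Python. This is a heavily simplified sketch, not a real GAN: the 'discriminator' here is a fixed scoring rule rather than a trained network, and both sides work on single numbers rather than images; the generator simply keeps any parameter change that makes its output look more like the real data:

```python
import random

random.seed(0)

# 'Real' training data: numbers clustered around 5.0.
real_data = [random.gauss(5.0, 0.5) for _ in range(200)]
real_mean = sum(real_data) / len(real_data)

def discriminator(sample):
    # Score in (0, 1]: samples near the centre of the real data look 'real'.
    # In a true GAN this would itself be a trained neural network.
    return 1.0 / (1.0 + abs(sample - real_mean))

guess = 0.0                      # the generator's single parameter
for _ in range(2000):
    candidate = guess + random.gauss(0.0, 0.5)   # generate artificial data
    if discriminator(candidate) > discriminator(guess):
        guess = candidate        # keep changes that fool the discriminator more
# guess ends up close to the centre of the real data
```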
Graphical processing units
These are similar to central processing units, found on a typical home computer. Graphical processing units have been used since the 1970s in gaming applications and have been designed to accelerate computer graphics and image processing. In the past decade, graphical processing units have been increasingly applied in the training of large machine learning models after they were found to be effective in processing large amounts of data in parallel. For more details see UK Post Brief 57 'Artificial intelligence: An explainer'.
Hallucinations
Large language models, such as ChatGPT, are unable to identify if the phrases they generate make sense or are accurate. This can sometimes lead to inaccurate results, also known as ‘hallucination’ effects, where large language models generate plausible sounding but inaccurate text. Hallucinations can also result from biases in training datasets or the model’s lack of access to up-to-date information (UK Post Brief 57 'Artificial intelligence: An explainer').
Interpretability
Some machine learning models, particularly those trained with deep learning, are so complex that it may be difficult or impossible to know how the model produced the output (UK Post Brief 57 'Artificial intelligence: An explainer', UK Post Note 633 Interpretable Machine Learning). Interpretability often describes the ability to present or explain a machine learning system’s decision-making process in terms that can be understood by humans (PN 633). Interpretability is sometimes referred to as transparency or explainability.
Large language models
A type of foundation model that is trained on vast amounts of text to carry out natural language processing tasks. During training phases, large language models learn parameters from factors such as the model size and training datasets. Parameters are then used by large language models to infer new content. Whilst there is no universally agreed figure for how large training datasets need to be, the biggest large language models (frontier AI) have been trained on billions or even trillions of bits of data. For example, the large language model underpinning ChatGPT 3.5 (released to the public in November 2022) was trained using 300 billion words obtained from internet text. See also natural language processing and foundation models.
Machine learning
A type of AI that allows a system to learn and improve from examples without all its instructions being explicitly programmed (UK Post Note 633 Interpretable Machine Learning). Machine learning systems learn by finding patterns in training datasets. They then create a model (with algorithms) encompassing their findings. This model is then typically applied to new data to make predictions or provide other useful outputs, such as translating text. Training machine learning systems for specific applications can involve different forms of learning, such as supervised, unsupervised, semi-supervised and reinforcement learning.
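The idea of learning a rule from examples rather than programming it explicitly can be shown with a toy sketch (our own, not from the glossary): a least-squares fit that recovers an approximate rule from (input, output) pairs:

```python
# Toy machine learning: instead of hand-coding the rule y ≈ 2x, the
# system recovers it from example pairs by least-squares fitting.
examples = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]

n = len(examples)
mean_x = sum(x for x, _ in examples) / n
mean_y = sum(y for _, y in examples) / n

# Slope and intercept of the best-fit line through the examples.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in examples)
         / sum((x - mean_x) ** 2 for x, _ in examples))
intercept = mean_y - slope * mean_x

def predict(x):
    # The learned 'model' applied to new data.
    return slope * x + intercept
```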
Misinformation
The UK Government defines misinformation as “the inadvertent spread of false information”. Advances in generative AI have lowered the barrier for the production of disinformation, misinformation, and deepfakes (UK Post Note 708 'Policy implications of artificial intelligence (AI)').
Narrow AI
Sometimes known as weak AI, these AI models are designed to perform a specific task (such as speech recognition) and cannot be adapted to other tasks. See also general-purpose AI.
Natural language processing
This focuses on programming computer systems to understand and generate human speech and text. Algorithms look for linguistic patterns in how sentences and paragraphs are constructed and how words, context and structure work together to create meaning. Applications include speech-to-text converters, online tools that summarise text, chatbots, speech recognition and translations. See also large language models.
Open-source
Open-source often means the underlying code used to run AI models is freely available for testing, scrutiny and improvement (UK Post Brief 57 'Artificial intelligence: An explainer').
Reinforcement learning
A way of training machine learning systems for a specific application. An AI system is trained by being rewarded for following certain ‘correct’ strategies and punished if it follows the ‘wrong’ strategies. After completing a task, the AI system receives feedback, which can sometimes be given by humans (known as ‘reinforcement learning from human feedback’). In the feedback, positive values are assigned to ‘correct’ strategies to encourage the AI system to use them, and negative values are assigned to ‘wrong’ strategies to discourage them, with the classification of ‘correct’ and ‘wrong’ depending on a pre-established outcome. This type of learning is useful for tweaking an AI model to follow certain ‘correct’ behaviours, such as fine-tuning a chatbot to output a preferred style, tone or format of language (UK Post Brief 57 'Artificial intelligence: An explainer'). See also supervised learning, unsupervised learning and semi-supervised learning.
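The reward-and-penalty loop described above can be sketched with a minimal two-action example (our own illustration; real systems use far richer state and reward structures). The agent learns a value for each action from the feedback it receives:

```python
import random

random.seed(0)

# Two possible actions; 'right' is the pre-established 'correct' strategy.
values = {"left": 0.0, "right": 0.0}     # the agent's learned value estimates
rewards = {"left": -1.0, "right": +1.0}  # feedback: penalty vs reward

for _ in range(200):
    if random.random() < 0.1:
        action = random.choice(["left", "right"])   # occasionally explore
    else:
        action = max(values, key=values.get)        # otherwise exploit best value
    reward = rewards[action]
    # Move the estimate for the chosen action towards the reward received.
    values[action] += 0.1 * (reward - values[action])
# values["right"] ends up high, values["left"] negative
```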
Responsible AI
Often refers to the practice of designing, developing, and deploying AI with certain values, such as being trustworthy, ethical, transparent, explainable, fair, robust and upholding privacy rights.
Robotics
Machines that are capable of automatically carrying out a series of actions and moving in the physical world. Modern robots contain algorithms that typically, but do not always, have some form of artificial intelligence. Applications include industrial robots used in manufacturing, medical robots for performing surgery, and self-navigating drones (UK Post Brief 57 'Artificial intelligence: An explainer').
Semi-supervised learning
A way of training machine learning systems for a specific application. An AI system uses a mix of supervised and unsupervised learning and labelled and unlabelled data. This type of learning is useful when it is difficult to extract relevant features from data and when there are high volumes of complex data, such as identifying abnormalities in medical images, like potential tumours or other markers of diseases. See also supervised learning, unsupervised learning, reinforcement learning and training datasets.
Superintelligence
A theoretical form of AI that has intelligence greater than humans and exceeds their cognitive performance in most domains. See also artificial general intelligence.
Supervised learning
A way of training machine learning systems for a specific application. In a training phase, an AI system is fed labelled data. The system trains from the input data, and the resulting model is then tested to see if it can correctly apply labels to new unlabelled data (such as if it can correctly label unlabelled pictures of cats and dogs accordingly). This type of learning is useful when it is clear what is being searched for, such as identifying spam mail. See also semi-supervised learning, unsupervised learning, reinforcement learning and training datasets.
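Using the cat-and-dog example above, a minimal supervised sketch (our own, using a simple nearest-neighbour rule rather than any particular production method) looks like this:

```python
# Labelled training data: (features, label) pairs, e.g. measurements
# extracted from pictures labelled 'cat' or 'dog'.
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

def classify(point):
    # Label new, unlabelled data with the label of the nearest
    # labelled training example (1-nearest-neighbour).
    def distance(example):
        (x, y), _ = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    _, label = min(training_data, key=distance)
    return label
```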
Training datasets
The set of data used to train an AI system. Training datasets can be labelled (for example, pictures of cats and dogs labelled ‘cat’ or ‘dog’ accordingly) or unlabelled.
Transformers
Transformers have greatly improved natural language processing, computer vision and robotic capabilities and the ability of AI models to generate text (UK Post Brief 57 'Artificial intelligence: An explainer'). A transformer can read vast amounts of text, spot patterns in how words and phrases relate to each other, and then make predictions about what word should come next. This ability to spot patterns in how words and phrases relate to each other is a key innovation, which has allowed AI models using transformer architectures to achieve a greater level of comprehension than previously possible. See also artificial neural networks and generative adversarial networks.
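The surface idea of next-word prediction can be illustrated with a far simpler stand-in (our own sketch, not a transformer): counting which word most often follows which in a training text. A transformer learns vastly richer patterns over whole passages, but the prediction task is the same:

```python
from collections import Counter, defaultdict

text = "the cat sat on the mat and the cat slept"
words = text.split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # The most frequently observed successor of `word`.
    return following[word].most_common(1)[0][0]

next_word = predict_next("the")   # 'cat' follows 'the' most often here
```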
Unsupervised learning
A way of training machine learning systems for a specific application. An AI system is fed large amounts of unlabelled data, in which it starts to recognise patterns of its own accord. This type of learning is useful when it is not clear what patterns are hidden in data, such as in online shopping basket recommendations (“customers who bought this item also bought the following items”). See also semi-supervised learning, supervised learning and reinforcement learning and training datasets.
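Finding structure in unlabelled data can be sketched with a minimal one-dimensional clustering example (our own illustration, a stripped-down k-means with two clusters; real implementations must also handle empty clusters and choose starting points carefully):

```python
# Unlabelled data: no one tells the system there are two groups.
data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
centres = [data[0], data[3]]   # crude initial guesses

for _ in range(10):
    # Assign each point to its nearest cluster centre...
    clusters = [[], []]
    for x in data:
        nearest = min((0, 1), key=lambda i: abs(x - centres[i]))
        clusters[nearest].append(x)
    # ...then move each centre to the mean of its assigned points.
    centres = [sum(c) / len(c) for c in clusters]
# centres settle near the two groups at roughly 1.03 and 8.07
```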
This article appears on the UK Government website as 'Artificial intelligence (AI) glossary' dated January 2024.
Related articles on Designing Buildings
- AI building design tools
- Artificial intelligence and civil engineering.
- Artificial Intelligence and its impact on the project profession.
- Artificial intelligence and surveying.
- Artificial intelligence for smarter, safer buildings.
- Artificial intelligence in buildings.
- BSRIA publishes Artificial Intelligence in Buildings white paper.
- Building automation and control systems.
- Building information modelling.
- Computer aided design CAD.
- Computers in building design.
- Generative design.
- Global building automation.
- Internet of things.
- Parametric design.
- Predictive analytics.
- The impact of digital on civil engineering.
- The long expanding list of AI tools for building planning, design, construction and management.
- Will AI ever be able to design buildings?